2025-11-13 09:19:00,333 [ 227130 ] INFO : ClickHouse root is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse (runner:53, check_args_and_update_paths)
2025-11-13 09:19:00,333 [ 227130 ] INFO : Cases dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:79, check_args_and_update_paths)
2025-11-13 09:19:00,333 [ 227130 ] INFO : utils dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/utils (runner:90, check_args_and_update_paths)
2025-11-13 09:19:00,333 [ 227130 ] INFO : base_configs_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/programs/server, binary: /home/ubuntu/_work/_temp/test/build/clickhouse, cases_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:92, check_args_and_update_paths)
clickhouse_integration_tests_volume
Running pytest container as: 'docker run --rm --name clickhouse_integration_tests_s2h5km --privileged --dns-search='.' --memory=30709035008 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=5ccda723c1fc -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=d862517635bf -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e CLICKHOUSE_USE_OLD_ANALYZER=1 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=1 --color=no --durations=0 --report-log=parallel0_1.jsonl --report-log-exclude-logs-on-passed-tests test_allowed_client_hosts/test.py::test_denied_host test_backup_restore_on_cluster/test_cancel_backup.py::test_cancel_restore test_backup_restore_on_cluster/test_cancel_backup.py::test_shutdown_cancels_backup -vvv " altinityinfra/integration-tests-runner:226bfaf75ac1 '.
Start tests
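Note: the runner script wraps the whole pytest session in a single docker run of the integration-tests-runner image; the freshly built binary, the server configs, and the test tree are bind-mounted, and image tags plus PYTEST_ADDOPTS travel as environment variables. A minimal sketch of that assembly pattern, assuming the real code in tests/integration/runner builds a shell string much like the one above (variable names here are illustrative):

    # Sketch only: how a "docker run" string like the one above can be built.
    volumes = {
        "/home/ubuntu/_work/_temp/test/build/clickhouse": "/clickhouse",
        "/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server": "/clickhouse-config",
    }
    env = {"PYTHONUNBUFFERED": "1", "DOCKER_CLIENT_TIMEOUT": "300"}
    cmd = "docker run --rm --privileged --cap-add=SYS_PTRACE"
    for host_path, container_path in volumes.items():
        cmd += f" --volume={host_path}:{container_path}"
    for name, value in env.items():
        cmd += f" -e {name}={value}"
    cmd += " altinityinfra/integration-tests-runner:226bfaf75ac1"
    print(cmd)  # the runner executes this with shell=True (see the end of this log)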
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python3
cachedir: .pytest_cache
Test order randomisation NOT enabled. Enable with --random-order or --random-order-bucket=
rootdir: /ClickHouse/tests/integration
configfile: pytest.ini
plugins: timeout-2.3.1, repeat-0.9.3, order-1.0.0, reportlog-0.4.0, xdist-3.5.0, random-order-1.1.1
timeout: 900.0s
timeout method: signal
timeout func_only: False
created: 10/10 workers
10 workers [3 items]

scheduling tests via LoadFileScheduling

test_allowed_client_hosts/test.py::test_denied_host
test_backup_restore_on_cluster/test_cancel_backup.py::test_cancel_restore
[gw3] [ 33%] PASSED test_allowed_client_hosts/test.py::test_denied_host
[gw0] [ 66%] PASSED test_backup_restore_on_cluster/test_cancel_backup.py::test_cancel_restore
test_backup_restore_on_cluster/test_cancel_backup.py::test_shutdown_cancels_backup
[gw0] [100%] FAILED test_backup_restore_on_cluster/test_cancel_backup.py::test_shutdown_cancels_backup

=================================== FAILURES ===================================
_________________________ test_shutdown_cancels_backup _________________________
[gw0] linux -- Python 3.10.12 /usr/bin/python3

    def test_shutdown_cancels_backup():
>       with NoTrashChecker() as no_trash_checker:

test_backup_restore_on_cluster/test_cancel_backup.py:556:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <...>, type = None, value = None, traceback = None

    def __exit__(self, type, value, traceback):
        list_of_znodes = set(
            node1.query(
                "SELECT name FROM system.zookeeper WHERE path = '/clickhouse/backups' "
                + "AND NOT (name == 'alive_tracker')"
            ).splitlines()
        )
        new_znodes = list_of_znodes.difference(self.__previous_list_of_znodes)
        if new_znodes:
            print(f"Found nodes in ZooKeeper: {new_znodes}")
            for node in new_znodes:
                print(
                    f"Nodes in '/clickhouse/backups/{node}':\n"
                    + node1.query(
                        f"SELECT name FROM system.zookeeper WHERE path = '/clickhouse/backups/{node}'"
                    )
                )
                print(
                    f"Nodes in '/clickhouse/backups/{node}/stage':\n"
                    + node1.query(
                        f"SELECT name FROM system.zookeeper WHERE path = '/clickhouse/backups/{node}/stage'"
                    )
                )
        if self.check_zookeeper:
            assert new_znodes == set()

        list_of_backups = set(
            os.listdir(os.path.join(node1.cluster.instances_dir, "backups"))
        )
        new_backups = list_of_backups.difference(self.__previous_list_of_backups)
        unfinished_backups = set(
            backup
            for backup in new_backups
            if not os.path.exists(
                os.path.join(node1.cluster.instances_dir, "backups", backup, ".backup")
            )
        )
        new_backups = set(
            backup for backup in new_backups if backup not in unfinished_backups
        )
        if new_backups:
            print(f"Found new backups: {new_backups}")
        if unfinished_backups:
            print(f"Found unfinished backups: {unfinished_backups}")
        assert new_backups == set(self.expect_backups)
        assert unfinished_backups.difference(self.allow_unfinished_backups) == set()

        all_errors = set()
        start_time = time.strftime(
            "%Y-%m-%d %H:%M:%S", self.__start_time_for_collecting_errors
        )
        for node in nodes:
            errors_query_result = node.query(
                "SELECT name FROM system.errors WHERE last_error_time >= toDateTime('"
                + start_time
                + "') "
                + "AND NOT ((name == 'KEEPER_EXCEPTION') AND (last_error_message LIKE '%Fault injection%')) "
                + "AND NOT (name == 'NO_ELEMENTS_IN_CONFIG')"
            )
            errors = errors_query_result.splitlines()
            if errors:
                print(f"{get_node_name(node)}: Found errors: {errors}")
                print(
                    node.query(
                        "SELECT name, last_error_message FROM system.errors WHERE last_error_time >= toDateTime('"
                        + start_time
                        + "')"
                    )
                )
            for error in errors:
>               assert (error in self.expect_errors) or (error in self.allow_errors)
E               AssertionError: assert ('NETLINK_ERROR' in ['QUERY_WAS_CANCELLED'] or 'NETLINK_ERROR' in [])
E                +  where ['QUERY_WAS_CANCELLED'] = <...>.expect_errors
E                +  and   [] = <...>.allow_errors

test_backup_restore_on_cluster/test_cancel_backup.py:394: AssertionError
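Note: the failing assertion is the last check in the test's NoTrashChecker context manager: every error recorded in system.errors during the test must be either expected (expect_errors) or tolerated (allow_errors), and this run produced NETLINK_ERROR, which is in neither list. A minimal sketch of how the test could tolerate such an environment-induced error, assuming the attributes are settable on the checker exactly as the failure output suggests:

    # Sketch only, not the shipped test body: whitelist the restart-induced
    # error next to the expected one. Attribute names are taken from the
    # assertion output; setting allow_errors here is an assumption.
    with NoTrashChecker() as no_trash_checker:
        no_trash_checker.expect_errors = ["QUERY_WAS_CANCELLED"]
        no_trash_checker.allow_errors = ["NETLINK_ERROR"]
        ...  # rest of test_shutdown_cancels_backup unchanged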
----------------------------- Captured stdout call -----------------------------
Using node1 as initiator
Sleeping 2.9076550683477174 seconds
Waiting for number of system processes = 1+
Got 1 system processes for backup 22dd202a8cf748d6bbacf7a1422cff2e after waiting 0 seconds
node2: Restarting...
node2: Restarted
Waiting for number of system processes = 0
Got 0 system processes for backup 22dd202a8cf748d6bbacf7a1422cff2e after waiting 0 seconds
node1: Found errors: ['QUERY_WAS_CANCELLED']
QUERY_WAS_CANCELLED	Got error from host node2:9000. DB::Exception: Query was cancelled. Stack trace:

0. ./contrib/llvm-project/libcxx/include/__exception/exception.h:113: Poco::Exception::Exception(String const&, int) @ 0x000000003833d451
1. ./build_docker/./src/Common/Exception.cpp:108: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001bd6da31
2. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000c3a520b
3. DB::Exception::Exception<>(int, FormatStringHelperImpl<>) @ 0x000000000c3bf7f2
4. ./build_docker/./src/Interpreters/ProcessList.cpp:567: DB::QueryStatus::throwQueryWasCancelled() const @ 0x000000002a6b1b82
5. ./build_docker/./src/Interpreters/ProcessList.cpp:520: DB::QueryStatus::throwProperExceptionIfNeeded(unsigned long const&, unsigned long const&) @ 0x000000002a6b192b
6. ./build_docker/./src/Interpreters/ProcessList.cpp:557: DB::QueryStatus::checkTimeLimit() @ 0x000000002a6b2917
7. ./build_docker/./src/Backups/BackupsWorker.cpp:679: DB::BackupsWorker::writeBackupEntries(std::shared_ptr, std::vector>, std::allocator>>>&&, String const&, std::shared_ptr, bool, std::shared_ptr)::$_0::operator()() const @ 0x0000000027566f53
8. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000002158b37a
9. ./src/Common/threadPoolCallbackRunner.h:178: DB::ThreadPoolCallbackRunnerLocal>, std::function>::operator()(std::function&&, Priority, std::optional)::'lambda'()::operator()() @ 0x000000002158ad9b
10. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000001c03ce12
11. ./contrib/llvm-project/libcxx/include/__type_traits/invoke.h:117: ThreadFromGlobalPoolImpl::ThreadFromGlobalPoolImpl>::ThreadFromThreadPool::*)(), ThreadPoolImpl>::ThreadFromThreadPool*>(void (ThreadPoolImpl>::ThreadFromThreadPool::*&&)(), ThreadPoolImpl>::ThreadFromThreadPool*&&)::'lambda'()::operator()() @ 0x000000001c04ad63
12. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000001c0375f1
13. ./contrib/llvm-project/libcxx/include/__type_traits/invoke.h:117: void* std::__thread_proxy[abi:ne190107]>, void (ThreadPoolImpl::ThreadFromThreadPool::*)(), ThreadPoolImpl::ThreadFromThreadPool*>>(void*) @ 0x000000001c045a50
14. asan_thread_start(void*) @ 0x000000000c357e77
15. ? @ 0x00007fb79d251ac3
16. ? @ 0x00007fb79d2e3850

Job's origin stack trace:
0. ./build_docker/./src/Common/StackTrace.cpp:386: StackTrace::StackTrace() @ 0x000000001bed40a7
1. ./build_docker/./src/Common/ThreadPool.cpp:130: void boost::heap::priority_queue, boost::parameter::void_, boost::parameter::void_, boost::parameter::void_>::emplace, Priority&, StrongTypedef&, DB::OpenTelemetry::TracingContextOnThread const, bool&, (anonymous namespace)::ScopedDecrement>(std::function&&, Priority&, StrongTypedef&, DB::OpenTelemetry::TracingContextOnThread const&&, bool&, (anonymous namespace)::ScopedDecrement&&) @ 0x000000001c035ba4
2. ./build_docker/./src/Common/ThreadPool.cpp:401: void ThreadPoolImpl>::scheduleImpl(std::function, Priority, std::optional, bool) @ 0x000000001c0408aa
3. ./build_docker/./src/Common/ThreadPool.cpp:494: ThreadPoolImpl>::scheduleOrThrowOnError(std::function, Priority) @ 0x000000001c03fcd7
4. ./src/Common/threadPoolCallbackRunner.h:188: DB::ThreadPoolCallbackRunnerLocal>, std::function>::operator()(std::function&&, Priority, std::optional) @ 0x00000000215894fb
5. ./build_docker/./src/Backups/BackupsWorker.cpp:711: DB::BackupsWorker::writeBackupEntries(std::shared_ptr, std::vector>, std::allocator>>>&&, String const&, std::shared_ptr, bool, std::shared_ptr) @ 0x0000000027565e4b
6. ./build_docker/./src/Backups/BackupsWorker.cpp:590: DB::BackupsWorker::doBackup(std::shared_ptr, std::shared_ptr const&, String const&, DB::BackupSettings const&, std::shared_ptr, std::shared_ptr, std::shared_ptr const&, bool, std::shared_ptr const&) @ 0x0000000027562a77
7. ./build_docker/./src/Backups/BackupsWorker.cpp:418: DB::BackupsWorker::BackupStarter::doBackup() @ 0x000000002757dcd1
8. ./build_docker/./src/Backups/BackupsWorker.cpp:488: void std::__function::__policy_invoker::__call_impl[abi:ne190107] const&, std::shared_ptr const&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x0000000027573133
9. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x0000000026b41cda
10. ./contrib/llvm-project/libcxx/include/future:1589: std::packaged_task::operator()() @ 0x0000000026b422cc
11. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000001c03ce12
12. ./contrib/llvm-project/libcxx/include/__type_traits/invoke.h:117: ThreadFromGlobalPoolImpl::ThreadFromGlobalPoolImpl>::ThreadFromThreadPool::*)(), ThreadPoolImpl>::ThreadFromThreadPool*>(void (ThreadPoolImpl>::ThreadFromThreadPool::*&&)(), ThreadPoolImpl>::ThreadFromThreadPool*&&)::'lambda'()::operator()() @ 0x000000001c04ad63
13. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000001c0375f1
14. ./contrib/llvm-project/libcxx/include/__type_traits/invoke.h:117: void* std::__thread_proxy[abi:ne190107]>, void (ThreadPoolImpl::ThreadFromThreadPool::*)(), ThreadPoolImpl::ThreadFromThreadPool*>>(void*) @ 0x000000001c045a50
15. asan_thread_start(void*) @ 0x000000000c357e77
16. ? @ 0x00007fb79d251ac3
17. ? @ 0x00007fb79d2e3850

Job's origin stack trace:
0. ./build_docker/./src/Common/StackTrace.cpp:386: StackTrace::StackTrace() @ 0x000000001bed40a7
1. ./build_docker/./src/Common/ThreadPool.cpp:130: void boost::heap::priority_queue, boost::parameter::void_, boost::parameter::void_, boost::parameter::void_>::emplace, Priority&, StrongTypedef&, DB::OpenTelemetry::TracingContextOnThread const, bool&, (anonymous namespace)::ScopedDecrement>(std::function&&, Priority&, StrongTypedef&, DB::OpenTelemetry::TracingContextOnThread const&&, bool&, (anonymous namespace)::ScopedDecrement&&) @ 0x000000001c035ba4
2. ./build_docker/./src/Common/ThreadPool.cpp:401: void ThreadPoolImpl>::scheduleImpl(std::function, Priority, std::optional, bool) @ 0x000000001c0408aa
3. ./build_docker/./src/Common/ThreadPool.cpp:494: ThreadPoolImpl>::scheduleOrThrowOnError(std::function, Priority) @ 0x000000001c03fcd7
4. ./src/Common/threadPoolCallbackRunner.h:52: std::function (std::function&&, Priority)> DB::threadPoolCallbackRunnerUnsafe>(ThreadPoolImpl>&, String const&)::'lambda'(std::function&&, Priority)::operator()(std::function&&, Priority) @ 0x0000000026b406bc
5. ./contrib/llvm-project/libcxx/include/__type_traits/invoke.h:149: std::future std::__function::__policy_invoker (std::function&&, Priority)>::__call_impl[abi:ne190107] (std::function&&, Priority)> DB::threadPoolCallbackRunnerUnsafe>(ThreadPoolImpl>&, String const&)::'lambda'(std::function&&, Priority), std::future (std::function&&, Priority)>>(std::__function::__policy_storage const*, std::function&&, Priority&&) @ 0x0000000026b40214
6. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000002755ecb2
7. ./build_docker/./src/Backups/BackupsWorker.cpp:334: DB::BackupsWorker::start(std::shared_ptr const&, std::shared_ptr) @ 0x000000002755e7ea
8. ./build_docker/./src/Interpreters/InterpreterBackupQuery.cpp:44: DB::InterpreterBackupQuery::execute() @ 0x000000002ad0b35e
9. ./build_docker/./src/Interpreters/executeQuery.cpp:1457: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*, std::shared_ptr&) @ 0x000000002ac0a883
10. ./build_docker/./src/Interpreters/executeQuery.cpp:1761: DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::shared_ptr, std::function, DB::QueryFlags, std::optional const&, std::function const&, std::optional const&)>) @ 0x000000002ac1221f
11. ./build_docker/./src/Interpreters/DDLWorker.cpp:510: DB::DDLWorker::tryExecuteQuery(DB::DDLTaskBase&, std::shared_ptr const&, bool) @ 0x000000002979692f
12. ./build_docker/./src/Interpreters/DDLWorker.cpp:675: DB::DDLWorker::processTask(DB::DDLTaskBase&, std::shared_ptr const&, bool) @ 0x00000000297925b9
13. ./build_docker/./src/Interpreters/DDLWorker.cpp:453: DB::DDLWorker::scheduleTasks(bool) @ 0x000000002978dbd9
14. ./build_docker/./src/Interpreters/DDLWorker.cpp:1203: DB::DDLWorker::runMainThread() @ 0x0000000029781dac
15. ./contrib/llvm-project/libcxx/include/__type_traits/invoke.h:117: ThreadFromGlobalPoolImpl::ThreadFromGlobalPoolImpl(void (DB::DDLWorker::*&&)(), DB::DDLWorker*&&)::'lambda'()::operator()() @ 0x00000000297b5543
16. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000001c0375f1
17. ./contrib/llvm-project/libcxx/include/__type_traits/invoke.h:117: void* std::__thread_proxy[abi:ne190107]>, void (ThreadPoolImpl::ThreadFromThreadPool::*)(), ThreadPoolImpl::ThreadFromThreadPool*>>(void*) @ 0x000000001c045a50
18. asan_thread_start(void*) @ 0x000000000c357e77
19. ? @ 0x00007fb79d251ac3
20. ? @ 0x00007fb79d2e3850

node2: Found errors: ['NETLINK_ERROR']
NO_ELEMENTS_IN_CONFIG	Certificate file is not set.
NETLINK_ERROR	Can't receive Netlink response: error -2
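Note: of the two rows node2 reports, NO_ELEMENTS_IN_CONFIG never reaches the assertion because the scan query excludes it explicitly, while NETLINK_ERROR (ClickHouse failing to read per-thread metrics over the Linux netlink taskstats interface, plausibly a side effect of the pkill/restart below) passes the SQL filter and is in neither whitelist. Compactly, the check that fires is equivalent to this sketch ('unexpected' is a hypothetical helper; the data is taken from this run):

    # Compact restatement of the NoTrashChecker whitelist check.
    def unexpected(errors, expect_errors, allow_errors):
        return [e for e in errors if e not in expect_errors and e not in allow_errors]

    # Reproduces this failure: NETLINK_ERROR is neither expected nor allowed.
    assert unexpected(["NETLINK_ERROR"], ["QUERY_WAS_CANCELLED"], []) == ["NETLINK_ERROR"]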
------------------------------ Captured log call -------------------------------
2025-11-13 09:20:07.841000 [ 671 ] DEBUG : Executing query SELECT name FROM system.zookeeper WHERE path = '/clickhouse/backups' AND NOT (name == 'alive_tracker') on node1 (cluster.py:3648, query)
2025-11-13 09:20:08.057000 [ 671 ] DEBUG : Executing query CREATE TABLE tbl ON CLUSTER 'cluster' (x UInt64) ENGINE=ReplicatedMergeTree('/clickhouse/tables/tbl/', '{replica}') ORDER BY tuple() PARTITION BY x%10 on node1 (cluster.py:3648, query)
2025-11-13 09:20:08.424000 [ 671 ] DEBUG : Executing query INSERT INTO tbl SELECT number FROM numbers(10) on node1 (cluster.py:3648, query)
2025-11-13 09:20:08.790000 [ 671 ] DEBUG : Executing query BACKUP TABLE tbl ON CLUSTER 'cluster' TO Disk('backups', '22dd202a8cf748d6bbacf7a1422cff2e') SETTINGS id='22dd202a8cf748d6bbacf7a1422cff2e' ASYNC on node1 (cluster.py:3648, query)
2025-11-13 09:20:09.056000 [ 671 ] DEBUG : Executing query SELECT status FROM system.backups WHERE id='22dd202a8cf748d6bbacf7a1422cff2e' on node1 (cluster.py:3648, query)
2025-11-13 09:20:09.322000 [ 671 ] DEBUG : Executing query SELECT count() FROM system.processes WHERE (query_kind='Backup') AND (query LIKE '%22dd202a8cf748d6bbacf7a1422cff2e%') on node1 (cluster.py:3648, query)
2025-11-13 09:20:12.498000 [ 671 ] DEBUG : Executing query SELECT count() FROM system.processes WHERE (query_kind='Backup') AND (query LIKE '%22dd202a8cf748d6bbacf7a1422cff2e%') on node2 (cluster.py:3648, query)
2025-11-13 09:20:12.714000 [ 671 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw0-node2-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] (cluster.py:2051, exec_in_container)
2025-11-13 09:20:12.714000 [ 671 ] DEBUG : Command:[docker exec -u root roottestbackuprestoreonclustercancelbackup-gw0-node2-1 bash -c ps -C clickhouse] (cluster.py:121, run_and_check)
2025-11-13 09:20:12.755000 [ 671 ] DEBUG : Stdout: PID TTY TIME CMD (cluster.py:145, run_and_check)
2025-11-13 09:20:12.755000 [ 671 ] DEBUG : Stdout: 8 ? 00:00:09 clickhouse (cluster.py:145, run_and_check)
2025-11-13 09:20:12.756000 [ 671 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw0-node2-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] (cluster.py:2051, exec_in_container)
2025-11-13 09:20:12.756000 [ 671 ] DEBUG : Command:[docker exec -u root roottestbackuprestoreonclustercancelbackup-gw0-node2-1 bash -c pkill clickhouse] (cluster.py:121, run_and_check)
2025-11-13 09:20:12.794000 [ 671 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw0-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container)
2025-11-13 09:20:12.794000 [ 671 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw0-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check)
2025-11-13 09:20:12.839000 [ 671 ] DEBUG : Stdout:8 (cluster.py:145, run_and_check)
2025-11-13 09:20:13.840000 [ 671 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw0-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container)
2025-11-13 09:20:13.840000 [ 671 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw0-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check)
2025-11-13 09:20:13.881000 [ 671 ] DEBUG : Stdout:8 (cluster.py:145, run_and_check)
2025-11-13 09:20:14.882000 [ 671 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw0-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container)
2025-11-13 09:20:14.883000 [ 671 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw0-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check)
2025-11-13 09:20:14.925000 [ 671 ] DEBUG : Stdout:8 (cluster.py:145, run_and_check)
2025-11-13 09:20:15.927000 [ 671 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw0-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container)
2025-11-13 09:20:15.927000 [ 671 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw0-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check)
2025-11-13 09:20:15.973000 [ 671 ] DEBUG : Stdout:8 (cluster.py:145, run_and_check)
2025-11-13 09:20:16.974000 [ 671 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw0-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container)
2025-11-13 09:20:16.974000 [ 671 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw0-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check)
2025-11-13 09:20:17.026000 [ 671 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw0-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container)
2025-11-13 09:20:17.026000 [ 671 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw0-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check)
2025-11-13 09:20:17.073000 [ 671 ] DEBUG : No clickhouse process running. Start new one. (cluster.py:4014, start_clickhouse)
2025-11-13 09:20:17.075000 [ 671 ] DEBUG : http://localhost:None "POST /v1.46/containers/roottestbackuprestoreonclustercancelbackup-gw0-node2-1/exec HTTP/1.1" 201 74 (connectionpool.py:547, _make_request)
2025-11-13 09:20:17.098000 [ 671 ] DEBUG : http://localhost:None "POST /v1.46/exec/4541bbfc91391d236c2c554af7edd645c1d4571abce70019db579f4df8a94ca3/start HTTP/1.1" 200 0 (connectionpool.py:547, _make_request)
2025-11-13 09:20:17.100000 [ 671 ] DEBUG : http://localhost:None "GET /v1.46/exec/4541bbfc91391d236c2c554af7edd645c1d4571abce70019db579f4df8a94ca3/json HTTP/1.1" 200 585 (connectionpool.py:547, _make_request)
2025-11-13 09:20:18.101000 [ 671 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw0-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container)
2025-11-13 09:20:18.102000 [ 671 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw0-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check)
2025-11-13 09:20:18.146000 [ 671 ] DEBUG : Stdout:789 (cluster.py:145, run_and_check)
2025-11-13 09:20:18.146000 [ 671 ] DEBUG : Clickhouse process running. (cluster.py:4028, start_clickhouse)
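Note: this is the restart dance driven by helpers/cluster.py: kill the server with pkill, poll the container's process list once per second until the PID disappears, start a new process, and poll again until one shows up. A sketch of that wait loop under the assumption that a hypothetical helper wraps the exact probe command visible above (the real logic lives around stop/start_clickhouse in cluster.py):

    import subprocess
    import time

    def wait_for_clickhouse(container, want_running, timeout=60):
        # Same probe as in the log: list clickhouse PIDs inside the container.
        probe = ["docker", "exec", container, "bash", "-c",
                 "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' "
                 "| grep -v 'bash -c' | awk '{print $1}'"]
        deadline = time.time() + timeout
        while time.time() < deadline:
            pids = subprocess.run(probe, capture_output=True, text=True).stdout.split()
            if bool(pids) == want_running:
                return pids
            time.sleep(1)  # the log above shows exactly this 1-second cadence
        raise TimeoutError("clickhouse did not %s" % ("start" if want_running else "stop"))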
2025-11-13 09:20:18.146000 [ 671 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw0-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container)
2025-11-13 09:20:18.146000 [ 671 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw0-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check)
2025-11-13 09:20:18.190000 [ 671 ] DEBUG : Stdout:789 (cluster.py:145, run_and_check)
2025-11-13 09:20:18.191000 [ 671 ] DEBUG : Executing query select 20 on node2 (cluster.py:3648, query)
2025-11-13 09:20:18.857000 [ 671 ] DEBUG : Executing query select 20 on node2 (cluster.py:3648, query)
2025-11-13 09:20:19.573000 [ 671 ] DEBUG : Executing query select 20 on node2 (cluster.py:3648, query)
2025-11-13 09:20:19.788000 [ 671 ] DEBUG : Executing query SELECT count() FROM system.processes WHERE (query_kind='Backup') AND (query LIKE '%22dd202a8cf748d6bbacf7a1422cff2e%') on node1 (cluster.py:3648, query)
2025-11-13 09:20:20.004000 [ 671 ] DEBUG : Executing query SELECT count() FROM system.processes WHERE (query_kind='Backup') AND (query LIKE '%22dd202a8cf748d6bbacf7a1422cff2e%') on node2 (cluster.py:3648, query)
2025-11-13 09:20:20.270000 [ 671 ] DEBUG : Executing query SELECT status FROM system.backups WHERE id='22dd202a8cf748d6bbacf7a1422cff2e' on node1 (cluster.py:3648, query)
2025-11-13 09:20:20.485000 [ 671 ] DEBUG : Executing query SELECT error FROM system.backups WHERE id='22dd202a8cf748d6bbacf7a1422cff2e' on node1 (cluster.py:3648, query)
2025-11-13 09:20:20.700000 [ 671 ] DEBUG : Executing query SYSTEM FLUSH LOGS on node1 (cluster.py:3648, query)
2025-11-13 09:20:21.317000 [ 671 ] DEBUG : Executing query SELECT status FROM system.backup_log WHERE id='22dd202a8cf748d6bbacf7a1422cff2e' ORDER BY status on node1 (cluster.py:3648, query)
2025-11-13 09:20:21.532000 [ 671 ] DEBUG : Executing query SELECT name FROM system.zookeeper WHERE path = '/clickhouse/backups' AND NOT (name == 'alive_tracker') on node1 (cluster.py:3648, query)
2025-11-13 09:20:21.748000 [ 671 ] DEBUG : Executing query SELECT name FROM system.errors WHERE last_error_time >= toDateTime('2025-11-13 09:20:07') AND NOT ((name == 'KEEPER_EXCEPTION') AND (last_error_message LIKE '%Fault injection%')) AND NOT (name == 'NO_ELEMENTS_IN_CONFIG') on node1 (cluster.py:3648, query)
2025-11-13 09:20:21.963000 [ 671 ] DEBUG : Executing query SELECT name, last_error_message FROM system.errors WHERE last_error_time >= toDateTime('2025-11-13 09:20:07') on node1 (cluster.py:3648, query)
2025-11-13 09:20:22.179000 [ 671 ] DEBUG : Executing query SELECT name FROM system.errors WHERE last_error_time >= toDateTime('2025-11-13 09:20:07') AND NOT ((name == 'KEEPER_EXCEPTION') AND (last_error_message LIKE '%Fault injection%')) AND NOT (name == 'NO_ELEMENTS_IN_CONFIG') on node2 (cluster.py:3648, query)
2025-11-13 09:20:22.395000 [ 671 ] DEBUG : Executing query SELECT name, last_error_message FROM system.errors WHERE last_error_time >= toDateTime('2025-11-13 09:20:07') on node2 (cluster.py:3648, query)
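Note: the query sequence above is the test's normal flow: start an ASYNC backup under a fixed id, poll system.processes to watch the distributed backup appear and drain, and after the restart read the final status back from system.backups and, after SYSTEM FLUSH LOGS, from system.backup_log. A sketch of the polling pattern (node1.query is the integration-helper call used throughout this log; the loop bound and sleep are assumptions):

    # Sketch of the ASYNC-backup status polling seen above.
    import time

    backup_id = "22dd202a8cf748d6bbacf7a1422cff2e"
    node1.query(
        f"BACKUP TABLE tbl ON CLUSTER 'cluster' TO Disk('backups', '{backup_id}') "
        f"SETTINGS id='{backup_id}' ASYNC"
    )
    for _ in range(30):  # assumed bound
        status = node1.query(
            f"SELECT status FROM system.backups WHERE id='{backup_id}'"
        ).strip()
        if status in ("BACKUP_CREATED", "BACKUP_FAILED"):
            break
        time.sleep(1)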
---------------------------- Captured log teardown -----------------------------
2025-11-13 09:20:22.678000 [ 671 ] DEBUG : Executing query DROP TABLE IF EXISTS tbl ON CLUSTER 'cluster' SYNC on node1 (cluster.py:3648, query)
2025-11-13 09:20:22.994000 [ 671 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw0/.env --project-name roottestbackuprestoreonclustercancelbackup-gw0 --file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw0/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw0/node2/docker-compose.yml stop --timeout 20] (cluster.py:121, run_and_check)
2025-11-13 09:20:24.172000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-node1-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:20:24.172000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-node2-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:20:24.172000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-node2-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:20:24.172000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-node1-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:20:24.172000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo1-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:20:24.173000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo2-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:20:24.173000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo3-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:20:24.173000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo3-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:20:24.173000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo2-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:20:24.173000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo1-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:20:24.173000 [ 671 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw0/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw0/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:121, run_and_check)
2025-11-13 09:20:24.181000 [ 671 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw0/node2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw0/node2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:121, run_and_check)
2025-11-13 09:20:24.190000 [ 671 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw0/.env --project-name roottestbackuprestoreonclustercancelbackup-gw0 --file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw0/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw0/node2/docker-compose.yml down --volumes] (cluster.py:121, run_and_check)
2025-11-13 09:20:24.612000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-node1-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:20:24.613000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-node2-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:20:24.613000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-node1-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:20:24.613000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-node1-1 Removing (cluster.py:147, run_and_check)
2025-11-13 09:20:24.613000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-node2-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:20:24.613000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-node2-1 Removing (cluster.py:147, run_and_check)
2025-11-13 09:20:24.613000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-node1-1 Removed (cluster.py:147, run_and_check)
2025-11-13 09:20:24.613000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-node2-1 Removed (cluster.py:147, run_and_check)
2025-11-13 09:20:24.613000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo1-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:20:24.613000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo2-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:20:24.613000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo3-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:20:24.613000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo2-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:20:24.613000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo2-1 Removing (cluster.py:147, run_and_check)
2025-11-13 09:20:24.613000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo1-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:20:24.613000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo1-1 Removing (cluster.py:147, run_and_check)
2025-11-13 09:20:24.613000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo3-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:20:24.613000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo3-1 Removing (cluster.py:147, run_and_check)
2025-11-13 09:20:24.613000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo2-1 Removed (cluster.py:147, run_and_check)
2025-11-13 09:20:24.614000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo1-1 Removed (cluster.py:147, run_and_check)
2025-11-13 09:20:24.614000 [ 671 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw0-zoo3-1 Removed (cluster.py:147, run_and_check)
2025-11-13 09:20:24.614000 [ 671 ] DEBUG : Stderr: Network roottestbackuprestoreonclustercancelbackup-gw0_default Removing (cluster.py:147, run_and_check)
2025-11-13 09:20:24.614000 [ 671 ] DEBUG : Stderr: Network roottestbackuprestoreonclustercancelbackup-gw0_default Removed (cluster.py:147, run_and_check)
2025-11-13 09:20:24.614000 [ 671 ] DEBUG : Cleanup called (cluster.py:851, cleanup)
2025-11-13 09:20:24.630000 [ 671 ] DEBUG : Docker networks for project roottestbackuprestoreonclustercancelbackup-gw0 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces)
2025-11-13 09:20:24.648000 [ 671 ] DEBUG : Docker containers for project roottestbackuprestoreonclustercancelbackup-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces)
2025-11-13 09:20:24.666000 [ 671 ] DEBUG : Docker volumes for project roottestbackuprestoreonclustercancelbackup-gw0 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces)
2025-11-13 09:20:24.666000 [ 671 ] DEBUG : Command:[docker container list --all --filter name='^/roottestbackuprestoreonclustercancelbackup-gw0-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:121, run_and_check)
2025-11-13 09:20:24.682000 [ 671 ] DEBUG : Unstopped containers: {} (cluster.py:865, cleanup)
2025-11-13 09:20:24.682000 [ 671 ] DEBUG : No running containers for project: roottestbackuprestoreonclustercancelbackup-gw0 (cluster.py:879, cleanup)
2025-11-13 09:20:24.682000 [ 671 ] DEBUG : Trying to prune unused networks... (cluster.py:885, cleanup)
2025-11-13 09:20:24.700000 [ 671 ] DEBUG : Trying to prune unused images... (cluster.py:901, cleanup)
2025-11-13 09:20:24.700000 [ 671 ] DEBUG : Command:[docker image prune -f] (cluster.py:121, run_and_check)
2025-11-13 09:20:24.728000 [ 671 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:145, run_and_check)
2025-11-13 09:20:24.728000 [ 671 ] DEBUG : Images pruned (cluster.py:904, cleanup)
2025-11-13 09:20:24.728000 [ 671 ] DEBUG : Trying to prune unused volumes... (cluster.py:910, cleanup)
2025-11-13 09:20:24.728000 [ 671 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:121, run_and_check)
2025-11-13 09:20:24.746000 [ 671 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check)
2025-11-13 09:20:24.746000 [ 671 ] DEBUG : Volumes pruned: 1 (cluster.py:915, cleanup)
----------------- generated report log file: parallel0_1.jsonl -----------------
============================== slowest durations ===============================
39.05s call     test_backup_restore_on_cluster/test_cancel_backup.py::test_cancel_restore
19.53s setup    test_backup_restore_on_cluster/test_cancel_backup.py::test_cancel_restore
17.33s setup    test_allowed_client_hosts/test.py::test_denied_host
15.77s call     test_backup_restore_on_cluster/test_cancel_backup.py::test_shutdown_cancels_backup
4.66s teardown test_allowed_client_hosts/test.py::test_denied_host
2.07s teardown test_backup_restore_on_cluster/test_cancel_backup.py::test_shutdown_cancels_backup
0.52s call     test_allowed_client_hosts/test.py::test_denied_host
0.37s teardown test_backup_restore_on_cluster/test_cancel_backup.py::test_cancel_restore
0.00s setup    test_backup_restore_on_cluster/test_cancel_backup.py::test_shutdown_cancels_backup
=========================== short test summary info ============================
FAILED test_backup_restore_on_cluster/test_cancel_backup.py::test_shutdown_cancels_backup
PASSED test_allowed_client_hosts/test.py::test_denied_host
PASSED test_backup_restore_on_cluster/test_cancel_backup.py::test_cancel_restore
==================== 1 failed, 2 passed in 88.80s (0:01:28) ====================
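Note: the non-zero pytest exit (one failed test) is then propagated by the runner script: it launches the container via subprocess.check_call, which raises CalledProcessError for any non-zero status, producing the traceback below. A minimal sketch of that propagation pattern (the demo command is illustrative, not the runner's real one):

    # check_call raises on non-zero exit, as at runner:492 below.
    import subprocess

    try:
        subprocess.check_call("exit 1", shell=True)
    except subprocess.CalledProcessError as e:
        print(f"returned non-zero exit status {e.returncode}")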
"/usr/lib/python3.10/subprocess.py", line 369, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command 'docker run --rm --name clickhouse_integration_tests_s2h5km --privileged --dns-search='.' --memory=30709035008 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=5ccda723c1fc -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=d862517635bf -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e CLICKHOUSE_USE_OLD_ANALYZER=1 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=1 --color=no --durations=0 --report-log=parallel0_1.jsonl --report-log-exclude-logs-on-passed-tests test_allowed_client_hosts/test.py::test_denied_host test_backup_restore_on_cluster/test_cancel_backup.py::test_cancel_restore test_backup_restore_on_cluster/test_cancel_backup.py::test_shutdown_cancels_backup -vvv " altinityinfra/integration-tests-runner:226bfaf75ac1 ' returned non-zero exit status 1.